91.
Based on the multiphase-field concept and integrating the idea of a vector-valued phase field, a phase field model for the typical allotropic transformation of a solid solution is proposed. The model properly takes into account the non-uniform distribution of parent-phase grain boundaries and crystal orientation, as illustrated by a simulation of the austenite-to-ferrite transformation in low-carbon steel. It is found that the misorientation-dependent grain boundary mobility strongly influences the resulting ferrite morphology, whereas the misorientation-dependent grain boundary energy has only a weak effect. The evolution of the various types of grain boundaries is quantitatively characterized in terms of their respective grain boundary energy dissipation. The simulated ferrite fraction agrees well with the value expected from the phase diagram, which verifies the model.
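The paper's multiphase model is beyond a short sketch, but the basic phase-field machinery it builds on, including the role of a (here misorientation-dependent) grain boundary mobility, can be illustrated with a minimal 1-D Allen-Cahn relaxation. Everything below (the double-well potential, grid, and parameter values) is a generic illustration, not the paper's model:

```python
import numpy as np

def allen_cahn_step(phi, mobility, kappa=1.0, dt=0.1):
    """One explicit Allen-Cahn update on a periodic 1-D grid:
    dphi/dt = M * (kappa * laplacian(phi) - f'(phi)),
    with the double-well potential f(phi) = phi^2 * (1 - phi)^2."""
    lap = np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)
    dfdphi = 2 * phi * (1 - phi) * (1 - 2 * phi)
    return phi + dt * mobility * (kappa * lap - dfdphi)

# Sharp initial interface between a "parent" grain (0) and a "new" grain (1).
phi0 = np.zeros(100)
phi0[50:] = 1.0

def evolve(mobility, steps=50):
    phi = phi0.copy()
    for _ in range(steps):
        phi = allen_cahn_step(phi, mobility)
    return phi

slow, fast = evolve(0.5), evolve(2.0)
# A larger grain boundary mobility relaxes the interface faster, so the
# profile deviates more from the initial sharp step after the same time.
print(np.abs(fast - phi0).sum() > np.abs(slow - phi0).sum())
```

The explicit scheme is stable here because `dt * mobility * kappa` stays well below 0.5 on a unit grid; a production multiphase-field code would use one order parameter per grain and a mobility that depends on the local misorientation.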
92.
Yihe Liu, Aaqif Afzaal Abbasi, Atefeh Aghaei, Almas Abbasi, Amir Mosavi, Shahaboddin Shamshirband, Mohammed A. A. Al-qaness, Computers, Materials & Continua, 2020, 63(1): 31-61
Mobile cloud computing is an emerging field that is rapidly gaining popularity across borders. Health informatics is likewise considered an extremely important field. This work brings the two fields together to solve the traditional problem of extracting electrocardiogram (ECG) signals from trace reports and then analyzing them. The developed system has two front ends, the first dedicated to the user, who photographs the trace report. Once the photograph is taken, mobile computing is used to extract the signal, which is then uploaded to the server, where further analysis is performed in the cloud. The second interface, intended for the physician, can then download and view the trace from the cloud. The data is held securely using password-based authentication. The system presented here is one of the first attempts at delivering a complete solution, and after further upgrades it should be possible to deploy the system in a commercial setting.
93.
With the popularity of sensor-rich mobile devices, mobile crowdsensing (MCS) has emerged as an effective method for data collection and processing. However, the MCS platform usually needs workers' precise locations for optimal task execution and collects sensing data from the workers, which raises severe privacy concerns. To protect workers' locations and sensing data from the untrusted MCS platform, this paper proposes a differentially private data aggregation method based on worker partition and location obfuscation (DP-DAWL). DP-DAWL first uses an improved K-means algorithm to divide the workers into groups and assigns each group a privacy budget according to its size (the number of workers). Each worker's location is then obfuscated, and his or her sensing data is perturbed by adding Laplace noise before being uploaded to the platform. In the data aggregation stage, DP-DAWL adopts an improved Kalman filter to filter out the added noise (both the noise added to the sensing data and the system noise of the sensing process). By using an optimal estimate of the noisy aggregated sensing data, the platform gains better utility from the aggregated data while still preserving workers' privacy. Extensive experiments on synthetic datasets demonstrate the effectiveness of the proposed method.
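The Laplace-noise perturbation step can be sketched as follows. This is a minimal illustration of the standard Laplace mechanism with a size-proportional budget split; the function names, the budget rule, and all parameter values are assumptions, not the paper's DP-DAWL implementation:

```python
import numpy as np

def perturb_reading(value, sensitivity, epsilon, rng):
    """Laplace mechanism: noise with scale sensitivity/epsilon yields
    epsilon-differential privacy for a query with that sensitivity."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def group_budget(total_epsilon, group_size, total_workers):
    """Toy budget split: a group's share of the total privacy budget is
    proportional to its size (larger budget means less noise)."""
    return total_epsilon * group_size / total_workers

rng = np.random.default_rng(0)
eps = group_budget(total_epsilon=1.0, group_size=20, total_workers=100)
noisy = [perturb_reading(25.0, sensitivity=1.0, epsilon=eps, rng=rng)
         for _ in range(1000)]
# The noise is zero-mean, so the aggregate of many perturbed readings
# stays close to the true value of 25.
print(round(float(np.mean(noisy)), 1))
```

This captures why aggregation recovers utility: the per-worker noise averages out, which is what the paper's Kalman-filter stage exploits more systematically.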
94.
This paper examines the causal relationship between oil prices and the Gross Domestic Product (GDP) of the Kingdom of Saudi Arabia (KSA). The study uses a quarterly data set collected by the Saudi Arabian Monetary Authority over the period 1974 to 2016. We investigate how a change in the real crude oil price affects the GDP of KSA. Based on a new technique, we treat the data along its continuous path. Precisely, we analyze the causality between the two variables, oil prices and GDP, using their yearly curves observed over the four quarters of each year. We discuss causality in the sense of Granger, which requires the stationarity of the data. Thus, in the first step, we test stationarity using a Monte Carlo test of functional time series stationarity. Our main goal is treated in the second step, where we use the functional causality idea to model the co-variability between the variables. We show that the two series are not integrated and that there is a causality between the two variables. All statistical analyses were performed using the R software.
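The paper works with yearly curves and R software; as a rough scalar analogue, the underlying two-step logic (check stationarity, then test Granger causality via nested regressions) can be sketched with a minimal one-lag F-test. The synthetic series, the coefficient values, and the helper names are illustrative assumptions:

```python
import numpy as np

def granger_f(y, x, lag=1):
    """Minimal one-lag Granger test: does lagged x improve prediction of y
    beyond lagged y alone? Returns the F statistic (larger => causality)."""
    yt, ylag, xlag = y[lag:], y[:-lag], x[:-lag]
    ones = np.ones_like(yt)
    def rss(design):
        beta, *_ = np.linalg.lstsq(design, yt, rcond=None)
        r = yt - design @ beta
        return float(r @ r)
    rss_r = rss(np.column_stack([ones, ylag]))         # restricted model
    rss_u = rss(np.column_stack([ones, ylag, xlag]))   # unrestricted model
    n, k, q = len(yt), 3, 1
    return ((rss_r - rss_u) / q) / (rss_u / (n - k))

# Synthetic stationary series in which "gdp" depends on the lagged "oil" price.
rng = np.random.default_rng(42)
n = 200
oil = rng.normal(size=n)
gdp = np.zeros(n)
for t in range(1, n):
    gdp[t] = 0.6 * oil[t - 1] + 0.3 * rng.normal()

print(granger_f(gdp, oil) > 10.0)   # strong evidence that oil "causes" gdp
```

The functional version in the paper replaces these scalar lags with whole yearly curves, but the nested-model comparison is the same idea.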
95.
As an unsupervised learning method, stochastic competitive learning is commonly used for community detection in social network analysis. Compared with traditional community detection algorithms, it has the advantage of enabling time-series community detection by simulating the community formation process. To improve accuracy and to address the problem that several parameters in stochastic competitive learning must be preset, the author improves the algorithm through particle position initialization, parameter optimization, and self-adaptation of the particle domination ability. The experimental results show that each improvement increases the accuracy of the algorithm, and the F1 score of the improved algorithm is 9.07% higher than that of the original algorithm.
96.
Jae-Hyun Ro, Won-Seok Lee, Min-Goo Kang, Dae-Ki Hong, Hyoung-Kyu Song, Computers, Materials & Continua, 2020, 64(1): 181-191
In this paper, supervised Deep Neural Network (DNN) based signal detection is analyzed for efficiently combating nonlinear distortions and improving error performance in a clipping-based Orthogonal Frequency Division Multiplexing (OFDM) system. One of the main disadvantages of OFDM is its high Peak-to-Average Power Ratio (PAPR). Clipping is a simple method for PAPR reduction; however, a side effect of clipping is nonlinear distortion, which makes the estimation of the transmitted symbols difficult even with Maximum Likelihood (ML) detection at the receiver. The DNN-based online signal detection uses an offline learning model in which all weights and biases of the fully-connected layers are trained on data sets to overcome the nonlinear distortions. This paper therefore introduces the processes required for online signal detection and offline learning, and compares the error performance with that of ML detection in clipping-based OFDM systems. In the simulation results, the DNN-based signal detection shows better error performance than conventional ML detection in a multi-path fading wireless channel. The performance improvement grows as the complexity of the system increases, for example in a large Multiple-Input Multiple-Output (MIMO) system or at a high clipping rate.
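The clipping operation and the PAPR it reduces can be sketched in a few lines. This is a generic amplitude-clipping illustration, not the paper's simulation setup; the subcarrier count, clipping ratio, and modulation are assumptions:

```python
import numpy as np

def papr_db(x):
    """Peak-to-Average Power Ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_signal(x, clipping_ratio_db):
    """Amplitude clipping: limit |x| to CR dB above the RMS level while
    keeping the phase of each sample (this is the nonlinear distortion)."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clipping_ratio_db / 20)
    scale = np.minimum(1.0, a_max / np.maximum(np.abs(x), 1e-12))
    return x * scale

rng = np.random.default_rng(1)
n_sub = 256
# Random QPSK symbols on the subcarriers -> time-domain OFDM symbol via IFFT.
sym = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
ofdm = np.fft.ifft(sym)

clipped = clip_signal(ofdm, clipping_ratio_db=3.0)
print(papr_db(ofdm) > papr_db(clipped))   # clipping lowers the PAPR
```

The clipped samples lose amplitude information, which is exactly the distortion the DNN detector is trained to undo.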
97.
Amer Ibrahim Al-Omari, Ibrahim M. Almanjahie, Amal S. Hassan, Heba F. Nagy, Computers, Materials & Continua, 2020, 64(2): 835-857
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when the distributions of both stress and strength are independent and follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling, and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived. Two estimators are obtained when both the strength and the stress have an odd or an even set size; the other two are obtained when the strength has an odd set size and the stress an even one, and vice versa. The performance of the suggested estimators is compared with that of their competitors under simple random sampling via a simulation study. The simulation study reveals that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their counterparts based on simple random sampling. In general, the estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
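The quantity R = P[Y < X] under the exponentiated Pareto distribution can be sanity-checked by Monte Carlo under simple random sampling. The sampler below uses the standard inverse-transform for the CDF F(x) = [1 - (1+x)^(-lam)]^theta; the parameter values are arbitrary, and the closed form used for comparison assumes a common lam (an assumption of this sketch, not necessarily of the paper):

```python
import numpy as np

def rvs_exp_pareto(theta, lam, size, rng):
    """Inverse-transform sampling from the exponentiated Pareto
    distribution with CDF F(x) = [1 - (1+x)^(-lam)]^theta, x > 0."""
    u = rng.uniform(size=size)
    return (1.0 - u ** (1.0 / theta)) ** (-1.0 / lam) - 1.0

rng = np.random.default_rng(7)
theta_x, theta_y, lam = 3.0, 1.0, 2.0
x = rvs_exp_pareto(theta_x, lam, 100_000, rng)   # strength
y = rvs_exp_pareto(theta_y, lam, 100_000, rng)   # stress
r_hat = float(np.mean(y < x))

# With a common lam, R has the closed form theta_x / (theta_x + theta_y),
# since the (1+x)^(-lam) transform makes both CDFs powers of one uniform.
r_true = theta_x / (theta_x + theta_y)
print(round(r_hat, 2), r_true)
```

Ranked set and median ranked set sampling change only how the sample is drawn, not the target R, so the same check applies to those estimators.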
98.
In this article, a new generalization of the inverse Lindley distribution is introduced based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data. The new distribution includes the inverse Lindley and the Marshall-Olkin inverse Lindley distributions as special cases. Essential properties of the generalized Marshall-Olkin inverse Lindley distribution are discussed and investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life, and stochastic ordering. Maximum likelihood estimation is considered under complete samples, Type-I censoring, and Type-II censoring. Maximum likelihood estimators as well as approximate confidence intervals of the population parameters are discussed. A comprehensive simulation study assesses the performance of the estimates in terms of their biases and mean square errors. The notability of the generalized Marshall-Olkin inverse Lindley model is illustrated by means of two real data sets. The results show that the generalized Marshall-Olkin inverse Lindley model can produce better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential, and Lindley distributions.
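The quantile function mentioned above can be computed numerically for the plain Marshall-Olkin inverse Lindley special case. The sketch below uses the standard inverse Lindley CDF F(x) = (1 + theta/((1+theta)x)) * exp(-theta/x) and the Marshall-Olkin survival transform; the paper's further generalization is not reproduced here, and the parameter values and bisection bounds are assumptions:

```python
import math

def cdf_inv_lindley(x, theta):
    """CDF of the inverse Lindley distribution, x > 0."""
    if x <= 0:
        return 0.0
    return (1.0 + theta / ((1.0 + theta) * x)) * math.exp(-theta / x)

def cdf_mo_inv_lindley(x, theta, alpha):
    """Marshall-Olkin inverse Lindley CDF: the survival function is
    alpha * S(x) / (1 - (1 - alpha) * S(x)), where S = 1 - F."""
    s = 1.0 - cdf_inv_lindley(x, theta)
    return 1.0 - alpha * s / (1.0 - (1.0 - alpha) * s)

def ppf(u, theta, alpha, lo=1e-9, hi=1e9, tol=1e-10):
    """Quantile function by bisection (the CDF is continuous, increasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf_mo_inv_lindley(mid, theta, alpha) < u:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: the CDF evaluated at the 0.5-quantile should return 0.5.
m = ppf(0.5, theta=1.0, alpha=2.0)
print(round(cdf_mo_inv_lindley(m, 1.0, 2.0), 6))
```

A numerical quantile function like this also gives inverse-transform sampling for the simulation study: draw u uniform and return `ppf(u, theta, alpha)`.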
99.
Due to their outstanding ability to process large quantities of high-dimensional data, machine learning models have been used in many settings, such as pattern recognition, classification, spam filtering, data mining, and forecasting. As an outstanding machine learning algorithm, K-Nearest Neighbor (KNN) has been widely used in different situations, yet its application to selecting qualified applicants for funding is almost new. The major problem lies in how to accurately determine the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two types: approved and not approved. FGDKNN is based on a gradient descent learning algorithm that updates the feature weights by iteratively minimizing the error ratio, so that the importance of the attributes can be described better. We investigate the performance of FGDKNN on Beijing Innofund data. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
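The core idea, weighting each feature inside the KNN distance so that informative attributes dominate, can be sketched as follows. The weight values here are set by hand to show the effect; the paper learns them by gradient descent, and the data, classifier details, and function names are assumptions of this sketch:

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_test, w, k=5):
    """KNN with a feature-weighted Euclidean distance:
    d(a, b) = sqrt(sum_j w_j * (a_j - b_j)^2)."""
    preds = []
    for q in X_test:
        d = np.sqrt((((X_train - q) ** 2) * w).sum(axis=1))
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

rng = np.random.default_rng(3)
n = 400
informative = rng.normal(size=n)        # feature 0 determines the label
noise = rng.normal(size=n) * 3.0        # feature 1 is pure noise
y = (informative > 0).astype(int)
X = np.column_stack([informative, noise])
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

flat = weighted_knn_predict(X_tr, y_tr, X_te, np.array([1.0, 1.0]))
wtd = weighted_knn_predict(X_tr, y_tr, X_te, np.array([1.0, 0.0]))
acc_flat = float((flat == y_te).mean())
acc_wtd = float((wtd == y_te).mean())
print(acc_wtd >= acc_flat)   # down-weighting the noisy feature helps
```

A gradient-descent version would start from uniform weights and nudge each `w_j` in the direction that reduces the classification error, which is what FGDKNN automates.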
100.
Host cardinality estimation is an important research field in network management and network security. Estimation based on an array of linear estimators is a common method, but existing algorithms do not take the memory footprint into account when selecting the number of estimators used for each host. This paper analyzes the relationship between memory occupancy and estimation accuracy and compares the effects of different parameters on the accuracy of the algorithm. Cardinality estimation is a randomized algorithm, so there is a deviation between the estimated results and the actual cardinalities. This deviation is affected by systematic factors, such as the random parameters inherent in a linear estimator and the random functions used to map a host to different linear estimators. These random factors cannot be reduced by merging multiple estimators, and existing algorithms cannot remove the deviation they cause. In this paper, we regard the estimation deviation as a random variable and propose a sampling method, denoted the linear estimator array step sampling algorithm (L2S), to reduce the influence of this random deviation. L2S improves the accuracy of the estimated cardinalities by evaluating and removing the expected value of the random deviation. The cardinality estimation algorithm based on the estimator array is computationally intensive and takes a long time when processing high-speed network data in a serial environment. To solve this problem, a method is proposed to port the algorithm to the Graphics Processing Unit (GPU). Experiments on real-world high-speed network traffic show that L2S can reduce the absolute bias by more than 22% on average, with extra processing time of less than 61 milliseconds on average.
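The linear estimator underlying the array is linear counting: hash each distinct item to a bit and infer the cardinality from the fraction of bits still zero. The sketch below shows one such estimator; the bitmap size, the hash choice, and the class name are assumptions of this illustration, not the paper's L2S algorithm:

```python
import hashlib
import math

class LinearEstimator:
    """Linear counting: hash each distinct item to one of m bits and
    estimate the cardinality as n_hat = -m * ln(V), where V is the
    fraction of bits that remain zero."""
    def __init__(self, m=4096):
        self.m = m
        self.bits = bytearray(m)

    def _index(self, item):
        digest = hashlib.sha1(item.encode()).digest()
        return int.from_bytes(digest[:8], 'big') % self.m

    def add(self, item):
        self.bits[self._index(item)] = 1

    def estimate(self):
        zeros = self.bits.count(0)
        if zeros == 0:
            return float('inf')   # bitmap saturated; a larger m is needed
        return -self.m * math.log(zeros / self.m)

est = LinearEstimator(m=4096)
for peer in range(1000):          # 1000 distinct peers contacted by a host
    est.add(f"peer-{peer}")
for peer in range(500):           # repeated contacts do not change the state
    est.add(f"peer-{peer}")
print(round(est.estimate()))      # typically within a few percent of 1000
```

The hash-collision randomness visible in this single estimator is exactly the deviation the paper models; merging bitmaps (bitwise OR) combines estimators but, as the paper notes, does not remove the systematic part of that deviation.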